Self-Domain Adaptation for Face Anti-Spoofing
Authors
Abstract
Although current face anti-spoofing methods achieve promising results under intra-dataset testing, they suffer from poor generalization to unseen attacks. Most existing works adopt domain adaptation (DA) or domain generalization (DG) techniques to address this problem. However, the target domain is often unknown during training, which limits the utilization of DA methods. DG methods can conquer this by learning domain-invariant features without seeing any target data, but they fail to utilize the information of the target data. In this paper, we propose a self-domain adaptation framework to leverage the unlabeled test data at inference. Specifically, an adaptor is designed to adapt the model for the test domain. In order to learn a better adaptor, a meta-learning-based algorithm is proposed using multiple source domains at the training step. At test time, the adaptor is updated only according to the unsupervised loss to further improve the performance. Extensive experiments on four public datasets validate the effectiveness of the method.
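For concreteness, the following is a minimal PyTorch-style sketch of the idea described above: a small adaptor module is meta-trained across several labeled source domains and then updated at test time using only an unsupervised loss on unlabeled target data. The residual adaptor design, the entropy objective standing in for the paper's unsupervised loss, the first-order meta update, and all layer sizes are illustrative assumptions, not the paper's exact method.

```python
# Illustrative sketch only: the adaptor design, the entropy objective used as
# a stand-in for the paper's unsupervised loss, and all sizes are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Adaptor(nn.Module):
    """Small residual module; only its parameters are updated at test time."""
    def __init__(self, dim=128):
        super().__init__()
        self.fc = nn.Linear(dim, dim)

    def forward(self, feat):
        return feat + self.fc(feat)

class AntiSpoofNet(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.backbone = nn.Sequential(nn.Flatten(),
                                      nn.Linear(3 * 32 * 32, dim), nn.ReLU())
        self.adaptor = Adaptor(dim)
        self.head = nn.Linear(dim, 2)  # live vs. spoof

    def forward(self, x):
        return self.head(self.adaptor(self.backbone(x)))

def entropy_loss(logits):
    # Unsupervised objective on unlabeled data: prediction entropy.
    logp = F.log_softmax(logits, dim=1)
    return -(logp.exp() * logp).sum(dim=1).mean()

def meta_train_step(model, src_batches, outer_opt, inner_lr=1e-3):
    """One training step over multiple labeled source domains.

    First-order simplification: the last domain plays the "unseen" role and
    the adaptor takes an unsupervised inner step on it; the supervised loss
    on all domains is then minimized so adaptation helps rather than hurts.
    """
    meta_test_x, _ = src_batches[-1]
    inner_opt = torch.optim.SGD(model.adaptor.parameters(), lr=inner_lr)
    inner_opt.zero_grad()
    entropy_loss(model(meta_test_x)).backward()
    inner_opt.step()

    outer_opt.zero_grad()
    loss = sum(F.cross_entropy(model(x), y) for x, y in src_batches)
    loss.backward()
    outer_opt.step()
    return loss.item()

def test_time_adapt(model, target_batch, steps=5, lr=1e-3):
    # At inference, update ONLY the adaptor, using unlabeled target data.
    opt = torch.optim.SGD(model.adaptor.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        entropy_loss(model(target_batch)).backward()
        opt.step()
    return model

model = AntiSpoofNet()
outer_opt = torch.optim.Adam(model.parameters(), lr=1e-4)
src = [(torch.randn(8, 3, 32, 32), torch.randint(0, 2, (8,)))
       for _ in range(3)]                           # three labeled source domains
meta_train_step(model, src, outer_opt)
test_time_adapt(model, torch.randn(8, 3, 32, 32))   # unlabeled target batch
```

Keeping the test-time update confined to the small adaptor is the key design choice this sketch mirrors: the frozen backbone preserves what was learned from the source domains, while the adaptor absorbs the shift to the new domain.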
Similar resources
Learn Convolutional Neural Network for Face Anti-Spoofing
Though some progress has been achieved, hand-crafted texture features, e.g., LBP [23] and LBP-TOP [11], are still unable to capture the most discriminative cues between genuine and fake faces. In this paper, instead of designing features ourselves, we rely on a deep convolutional neural network (CNN) to learn features of high discriminative ability in a supervised manner. Combined with some...
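A minimal sketch of the supervised-CNN approach that snippet describes, assuming PyTorch; the architecture and layer sizes are illustrative, not those of the cited paper.

```python
# Illustrative sketch: a small CNN trained with plain supervised labels
# (genuine vs. fake) instead of hand-crafted LBP-style features.
import torch
import torch.nn as nn

class SpoofCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, 2)  # genuine vs. fake

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = SpoofCNN()
logits = model(torch.randn(4, 3, 64, 64))        # a batch of face crops
loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 2, (4,)))
loss.backward()  # standard supervised learning of discriminative cues
```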
Sample-oriented Domain Adaptation for Image Classification
Image processing performs operations on an image in order to obtain an enhanced image or to extract useful information from it. Conventional image-processing algorithms cannot perform well in scenarios where the training images (source domain) used to learn the model have a different distribution from the test images (target domain). Also, many real-world applicat...
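To make that source/target mismatch concrete, here is a small illustration: a linear-kernel maximum mean discrepancy (MMD) between source and target feature means, a common discrepancy measure in domain adaptation. This is our own example, not the method of the cited paper.

```python
# Illustration of distribution shift, not the cited paper's method: a
# linear-kernel MMD between mean feature embeddings of the two domains.
import torch

def linear_mmd(source_feats: torch.Tensor, target_feats: torch.Tensor):
    # Large values indicate a source/target gap the model must bridge.
    return (source_feats.mean(0) - target_feats.mean(0)).pow(2).sum()

src = torch.randn(100, 16)          # features of training (source) images
tgt = torch.randn(100, 16) + 0.5    # shifted features of test (target) images
print(linear_mmd(src, tgt))
```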
Journal
Journal title: Proceedings of the ... AAAI Conference on Artificial Intelligence
Year: 2021
ISSN: 2159-5399, 2374-3468
DOI: https://doi.org/10.1609/aaai.v35i4.16379